Prevalent techniques in zero-shot learning do not generalize well to other related problem scenarios. Here, we present a unified approach to conventional zero-shot, generalized zero-shot, and few-shot learning. Our approach is based on a novel Class Adapting Principal Directions (CAPD) concept that allows multiple embeddings of image features into a semantic space. Given an image, our method produces one principal direction for each seen class. It then learns how to combine these directions to obtain the principal direction for each unseen class, such that the CAPD of the test image is aligned with the semantic embedding of the true class and opposite to those of the other classes. This allows efficient and class-adaptive information transfer from seen to unseen classes. In addition, we propose an automatic procedure for selecting the seen classes most useful to each unseen class, which improves the robustness of zero-shot learning. Our method can update the unseen CAPDs using a few labeled unseen images, enabling a few-shot learning scenario. Furthermore, it can generalize the seen CAPDs by estimating seen-unseen diversity, which significantly improves performance in generalized zero-shot learning. Our extensive evaluations demonstrate that the proposed approach consistently achieves superior performance in zero-shot, generalized zero-shot, and few/one-shot learning problems.
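To make the mechanism concrete, the sketch below illustrates the core idea with NumPy: per-seen-class embedding matrices produce one principal direction each, learned mixing weights combine them into an unseen-class direction, and classification picks the unseen class whose semantic embedding best aligns with that direction. All dimensions, the random stand-in weights, and the cosine-similarity scoring are illustrative assumptions, not the paper's actual parameters or training objective.

```python
import numpy as np

# Hypothetical dimensions (not from the paper): d-dim image features,
# k-dim semantic space, S seen classes, U unseen classes.
rng = np.random.default_rng(0)
d, k, S, U = 64, 16, 5, 2

x = rng.normal(size=d)                        # test image feature
W_seen = rng.normal(size=(S, k, d))           # one embedding matrix per seen class
sem_unseen = rng.normal(size=(U, k))          # semantic embeddings of unseen classes

# One principal direction (CAPD) per seen class: p_s = W_s @ x.
capd_seen = np.stack([W @ x for W in W_seen])            # shape (S, k)

# Mixing weights alpha[u, s] combine seen CAPDs into each unseen CAPD;
# in the paper these are learned, here random weights stand in.
alpha = rng.random(size=(U, S))
alpha /= alpha.sum(axis=1, keepdims=True)                # normalize rows
capd_unseen = alpha @ capd_seen                          # shape (U, k)

# Classify by alignment (cosine similarity here, an assumption) between
# the image's unseen CAPDs and the unseen-class semantic embeddings.
sims = np.einsum('uk,uk->u', capd_unseen, sem_unseen) / (
    np.linalg.norm(capd_unseen, axis=1) * np.linalg.norm(sem_unseen, axis=1))
pred = int(np.argmax(sims))
print(pred)
```

In a few-shot variant, the same structure allows the unseen CAPDs (or the mixing weights) to be refined once a handful of labeled unseen-class images become available.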